Reading comprehension of legal text can be a particularly challenging task due to the length and complexity of legal clauses and a shortage of expert-annotated datasets. To address this challenge, we introduce the Merger Agreement Understanding Dataset (MAUD), an expert-annotated reading comprehension dataset based on the American Bar Association's 2021 Public Target Deal Points Study, with over 39,000 examples and over 47,000 total annotations. Our fine-tuned Transformer baselines show promising results, with models performing well above random on most questions. However, on a large subset of questions, there is still room for significant improvement. As the only expert-annotated merger agreement dataset, MAUD is valuable as a benchmark for both the legal profession and the NLP community.
Can a machine learn machine learning? We propose to answer this question using the same criteria we use to answer the analogous question: can humans learn machine learning? We automatically answer MIT final exams in Introduction to Machine Learning at a human level. The course is a large undergraduate class with about five hundred students each semester. Recently, program synthesis and few-shot learning solved university-level problem-set questions in mathematics and STEM courses at a human level. In this work, we solve questions from final exams, which differ from problem sets: the questions are longer, have multiple parts, are more complicated, and span a broader set of topics. We provide a new dataset and benchmark of eight MIT Introduction to Machine Learning final exams given between Fall 2017 and Spring 2022, along with code for automatically answering these questions and generating new ones. We perform ablation studies comparing zero-shot learning, few-shot learning, and chain-of-thought prompting with GPT-3 pre-trained on text and fine-tuned on code, across a range of machine learning topics, and find that few-shot learning methods perform best. We make our data and code publicly available for the machine learning community.
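As an illustration of the few-shot setup described above, worked question-answer exemplars can be assembled into a single prompt for a completion-style language model. This is a minimal sketch; the exemplar texts and the `complete` callable are hypothetical placeholders, not the paper's actual pipeline.

```python
# Minimal sketch of few-shot prompting for exam questions (hypothetical
# exemplars; `complete` stands in for any completion-style LM API call).

FEW_SHOT_EXEMPLARS = [
    ("Q: Is a decision stump a linear classifier in the original feature space?",
     "A: Yes; a stump thresholds a single feature, which is a linear decision rule."),
    ("Q: Does increasing k in k-nearest neighbors increase or decrease variance?",
     "A: Increasing k averages over more neighbors, which decreases variance."),
]

def build_prompt(question: str) -> str:
    """Concatenate worked exemplars before the new question (few-shot)."""
    shots = "\n\n".join(f"{q}\n{a}" for q, a in FEW_SHOT_EXEMPLARS)
    return f"{shots}\n\nQ: {question}\nA:"

def answer(question: str, complete) -> str:
    """`complete` is any text-completion function, e.g. an LM API wrapper."""
    return complete(build_prompt(question)).strip()
```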
We demonstrate that a neural network pre-trained on text and fine-tuned on code solves mathematics course problems by program synthesis. We turn questions into programming tasks, automatically generate programs, and then execute them, solving university-level problems from MIT's large mathematics courses (Single-Variable Calculus 18.01, Multivariable Calculus 18.02, Differential Equations 18.03, Introduction to Probability and Statistics 18.05, Linear Algebra 18.06, and Mathematics for Computer Science 6.042) as well as questions from the MATH dataset (covering Prealgebra, Algebra, Counting and Probability, Number Theory, and Precalculus), the latest benchmark of advanced mathematics problems designed to assess mathematical reasoning. We explore prompt generation methods that enable Transformers to generate problem-solving programs for these topics, including solutions with plots. We generate correct answers for a random sample of questions in each topic. We quantify the gap between the original and transformed questions and conduct a survey to evaluate the quality and difficulty of the generated questions. This is the first work to automatically solve, grade, and generate university-level mathematics course questions at scale, which represents a milestone for higher education.
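The question-to-program-to-answer loop described above can be sketched as follows. This is a schematic outline under assumptions: `generate_program` is a hypothetical stand-in for a text-to-code model, and executing untrusted generated code would need sandboxing in practice.

```python
# Sketch of the solve-by-program-synthesis loop: wrap the question in a
# code-generation prompt, synthesize a program, execute it, read `answer`.

CODEGEN_TEMPLATE = (
    "# Write a Python program that solves the following problem and\n"
    "# stores the result in a variable named `answer`.\n"
    "# Problem: {question}\n"
)

def solve(question: str, generate_program) -> object:
    """`generate_program` stands in for a text-to-code model (e.g. Codex)."""
    program = generate_program(CODEGEN_TEMPLATE.format(question=question))
    namespace: dict = {}
    exec(program, namespace)  # in practice, sandbox this execution
    return namespace.get("answer")
```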
In real-world machine learning applications, reliable and safe systems must consider measures of performance beyond standard test-set accuracy. These other goals include out-of-distribution (OOD) robustness, prediction consistency, resilience to adversaries, calibrated uncertainty estimates, and the ability to detect anomalous inputs. However, improving performance toward these goals is often a balancing act that today's methods cannot achieve without sacrificing performance along other safety axes. For instance, adversarial training improves adversarial robustness but sharply degrades other classifier performance metrics. Similarly, strong data augmentation and regularization techniques often improve OOD robustness but harm anomaly detection, raising the question of whether a Pareto improvement on all existing safety measures is possible. To meet this challenge, we design a new data augmentation strategy utilizing the natural structural complexity of pictures such as fractals, which outperforms numerous baselines, is near Pareto-optimal, and roundly improves safety measures.
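The augmentation idea can be pictured as iteratively blending a training image with a structurally complex "mixing picture" such as a fractal. The numpy sketch below only approximates the strategy described above; the mixing-weight distribution and number of rounds are illustrative assumptions.

```python
import numpy as np

def mix_step(img: np.ndarray, pic: np.ndarray, additive: bool) -> np.ndarray:
    """One blending round; inputs are float arrays scaled to [0, 1]."""
    w = np.random.beta(3, 3)  # random mixing weight
    if additive:
        out = w * img + (1 - w) * pic
    else:
        out = (img ** w) * (pic ** (1 - w))  # geometric (multiplicative) mix
    return np.clip(out, 0.0, 1.0)

def augment(img: np.ndarray, mixing_pic: np.ndarray, max_rounds: int = 4) -> np.ndarray:
    """Iteratively mix the image with a complex picture such as a fractal."""
    out = img
    for _ in range(np.random.randint(1, max_rounds + 1)):
        out = mix_step(out, mixing_pic, additive=np.random.rand() < 0.5)
    return out
```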
We solve university-level probability and statistics questions by program synthesis using OpenAI's Codex, a Transformer trained on text and fine-tuned on code. We transform course problems from MIT's 18.05 Introduction to Probability and Statistics and Harvard's STAT110 Probability into programming tasks, then execute the generated code to obtain a solution. Since these course questions are grounded in probability, we often aim to have Codex generate probabilistic programs that simulate a large number of probabilistic dependencies to compute the solution. Our approach requires prompt engineering to transform a question from its original form into an explicit, tractable form that leads to a correct program and solution. To estimate the amount of work needed to translate an original question into its tractable form, we measure the similarity between original and transformed questions. Our work is the first to introduce a new dataset of university-level probability and statistics problems and to solve these problems in a scalable way using the program synthesis capabilities of large language models.
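For illustration, the kind of simulation-style probabilistic program described above might look like the following Monte Carlo estimate (a hypothetical example question, not one drawn from the dataset):

```python
import numpy as np

# Example simulation-style program: estimate the probability that the
# sum of two fair dice exceeds 9, by sampling the probabilistic dependencies.
rng = np.random.default_rng(0)
n = 1_000_000
rolls = rng.integers(1, 7, size=(n, 2)).sum(axis=1)
print((rolls > 9).mean())  # ~ 6/36 = 0.1667 (sums of 10, 11, or 12)
```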
As one of the most important psychological stress reactions, micro-expressions (MEs) are spontaneous and transient facial expressions that can reveal the genuine emotions of human beings. Recognizing MEs automatically (MER) is therefore becoming increasingly important in the field of affective computing, providing essential technical support for lie detection, psychological analysis, and other areas. However, the lack of abundant ME data seriously restricts the development of cutting-edge data-driven MER models. Despite several recent spontaneous ME datasets intended to alleviate this problem, the amount of available data remains tiny. To address this data hunger, we construct the largest dynamic spontaneous ME dataset to date, called DFME (Dynamic Facial Micro-expressions), which includes 7,526 well-labeled ME videos elicited from 671 participants and annotated by more than 20 annotators over three years. We then apply four classical spatiotemporal feature learning models to DFME to perform MER experiments and objectively verify the validity of the dataset. In addition, we explore different solutions to the class imbalance and key-frame sequence sampling problems in dynamic MER on DFME, so as to provide a valuable reference for future research. The comprehensive experimental results show that DFME can facilitate research on automatic MER and provides a new benchmark for the task. DFME will be published via https://mea-lab-421.github.io.
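As a generic illustration of one common remedy for the class imbalance mentioned above (a sketch of a standard technique, not the paper's specific solution), inverse-frequency weighted sampling can balance rare ME classes during training:

```python
import torch
from torch.utils.data import DataLoader, WeightedRandomSampler

def balanced_loader(dataset, labels: list[int], batch_size: int = 32) -> DataLoader:
    """Oversample rare micro-expression classes via inverse-frequency weights."""
    label_tensor = torch.tensor(labels)
    counts = torch.bincount(label_tensor)            # per-class frequencies
    weights = 1.0 / counts[label_tensor].float()     # rarer class -> larger weight
    sampler = WeightedRandomSampler(weights, num_samples=len(labels), replacement=True)
    return DataLoader(dataset, batch_size=batch_size, sampler=sampler)
```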
An increasing number of public datasets have shown a marked clinical impact on assessing anatomical structures. However, each of these datasets is small, partially labeled, and rarely investigates severe tumor subjects. Moreover, current models are limited to segmenting specific organs/tumors and cannot be extended to novel domains and classes. To tackle these limitations, we introduce embeddings learned from Contrastive Language-Image Pre-training (CLIP) into segmentation models, dubbed the CLIP-Driven Universal Model. The Universal Model can better segment 25 organs and 6 types of tumors by exploiting the semantic relationships between abdominal structures. The model is developed from an assembly of 14 datasets with 3,410 CT scans and evaluated on 6,162 external CT scans from 3 datasets. We rank first on the public leaderboard of the Medical Segmentation Decathlon (MSD) and achieve state-of-the-art results on Beyond The Cranial Vault (BTCV). Compared with dataset-specific models, the Universal Model is computationally more efficient (6x faster), generalizes better to CT scans from varying sites, and shows stronger transfer learning performance on novel tasks. The design of the CLIP embedding enables the Universal Model to be easily extended to new classes without catastrophically forgetting previously learned classes.
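A simplified sketch of the CLIP-driven idea follows, assuming precomputed CLIP text embeddings for each class name; this toy head merely correlates projected voxel features with the fixed text embeddings, which is a coarse approximation rather than the model's actual parameter-generation design.

```python
import torch
import torch.nn as nn

class CLIPDrivenHead(nn.Module):
    """Toy segmentation head: dot-product between projected voxel features
    and fixed CLIP text embeddings of class names ("liver", "kidney", ...)."""

    def __init__(self, feat_dim: int, text_embeds: torch.Tensor):
        super().__init__()
        # Project decoder features into the text-embedding space.
        self.proj = nn.Conv3d(feat_dim, text_embeds.shape[1], kernel_size=1)
        self.register_buffer("text_embeds", text_embeds)  # (num_classes, d)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (B, C, D, H, W) -> logits: (B, num_classes, D, H, W)
        x = self.proj(feats)
        return torch.einsum("bcdhw,kc->bkdhw", x, self.text_embeds)
```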
In recent years, the Transformer architecture has shown its superiority in the video-based person re-identification task. Inspired by video representation learning, existing methods mainly focus on designing modules to extract informative spatial and temporal features. However, they are still limited in extracting local attributes and global identity information, both of which are critical for person re-identification. In this paper, we propose a novel Multi-Stage Spatial-Temporal Aggregation Transformer (MSTAT) with two newly designed proxy embedding modules to address this issue. Specifically, MSTAT consists of three stages that encode the attribute-associated, the identity-associated, and the attribute-identity-associated information from the video clips, respectively, achieving a holistic perception of the input person. We combine the outputs of all the stages for the final identification. In practice, to save computational cost, Spatial-Temporal Aggregation (STA) modules are first adopted in each stage to conduct self-attention operations along the spatial and temporal dimensions separately. We further introduce the Attribute-Aware and Identity-Aware Proxy embedding modules (AAP and IAP) to extract informative and discriminative feature representations at different stages. All of them are realized by newly designed self-attention operations with specific meanings. Moreover, temporal patch shuffling is introduced to further improve the robustness of the model. Extensive experimental results demonstrate the effectiveness of the proposed modules in extracting informative and discriminative information from videos, and show that MSTAT achieves state-of-the-art accuracies on various standard benchmarks.
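A minimal sketch of the factorized attention pattern that the STA modules are described as using (spatial self-attention within each frame, then temporal self-attention across frames per patch); the layer sizes and the absence of residual connections here are illustrative simplifications.

```python
import torch
import torch.nn as nn

class FactorizedSTA(nn.Module):
    """Self-attention along space and time separately (illustrative sketch)."""

    def __init__(self, dim: int = 256, heads: int = 8):
        super().__init__()
        self.spatial = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.temporal = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, T, N, C) -- T frames, N patches per frame, C channels
        b, t, n, c = x.shape
        s = x.reshape(b * t, n, c)                  # attend over patches
        s, _ = self.spatial(s, s, s)
        s = s.reshape(b, t, n, c).transpose(1, 2).reshape(b * n, t, c)
        s, _ = self.temporal(s, s, s)               # attend over frames
        return s.reshape(b, n, t, c).transpose(1, 2)
```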
Neural models with an encoder-decoder framework provide a feasible solution to Question Generation (QG). However, after analyzing the model vocabulary we find that more than 23\% of the entries in current models (both RNN-based and pre-training-based) are inflected forms. As a result, the encoder generates separate embeddings for these inflected forms, wasting training data and parameters. Even worse, during decoding these models are vulnerable to irrelevant noise and suffer from high computational costs. In this paper, we propose an approach to enhance the performance of QG by fusing word transformation. Firstly, we identify the inflected forms of words in the encoder input and replace them with their root words, letting the encoder pay more attention to the repeated root words. Secondly, we recast QG as a combination of the following actions in the encoder-decoder framework: generating a question word, copying a word from the source sequence, or generating a word transformation type. This extension greatly reduces the number of words the decoder must predict, as well as the noise. We apply our approach to a typical RNN-based model and \textsc{UniLM} to obtain improved versions. We conduct extensive experiments on the SQuAD and MS MARCO datasets. The experimental results show that the improved versions significantly outperform the corresponding baselines in terms of BLEU, ROUGE-L, and METEOR as well as time cost.
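The first step described above, replacing inflected forms with root words before encoding, can be sketched with an off-the-shelf lemmatizer; spaCy is used here as one possible choice, not the paper's stated tooling.

```python
import spacy

nlp = spacy.load("en_core_web_sm")  # any English pipeline with a lemmatizer

def replace_inflected_forms(text: str) -> str:
    """Map inflected surface forms to their root words so the encoder
    sees one embedding per lemma (e.g. "running", "ran" -> "run")."""
    return " ".join(tok.lemma_ for tok in nlp(text))

print(replace_inflected_forms("The children were running faster."))
# e.g. -> "the child be run fast ." (exact lemmas depend on the pipeline)
```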
In this paper, we develop an efficient multi-scale network to predict action classes in partial videos in an end-to-end manner. Unlike most existing methods with offline feature generation, our method directly takes frames as input and models motion evolution on two different temporal scales. It thus avoids the complexity of two-stage modeling as well as the insufficient temporal and spatial information of a single scale. Our proposed End-to-End MultiScale Network (E2EMSNet) is composed of two scales, named the segment scale and the observed global scale. The segment scale leverages temporal differences over consecutive frames, processed by 2D convolutions, to capture finer motion patterns. For the observed global scale, a Long Short-Term Memory (LSTM) is incorporated to capture motion features of the observed frames. Our model provides a simple and efficient modeling framework with a small computational cost. E2EMSNet is evaluated on three challenging datasets: BIT, HMDB51, and UCF101. Extensive experiments demonstrate the effectiveness of our method for action prediction in videos.
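The segment-scale idea, temporal differences over consecutive frames fed to 2D convolutions, can be sketched as follows (channel sizes are illustrative assumptions, not the paper's configuration):

```python
import torch
import torch.nn as nn

class SegmentScale(nn.Module):
    """Sketch: frame differencing followed by a 2D convolution."""

    def __init__(self, channels: int = 3, out_channels: int = 64):
        super().__init__()
        self.conv = nn.Conv2d(channels, out_channels, kernel_size=3, padding=1)

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (B, T, C, H, W); differences capture finer motion patterns
        diff = frames[:, 1:] - frames[:, :-1]          # (B, T-1, C, H, W)
        b, t, c, h, w = diff.shape
        return self.conv(diff.reshape(b * t, c, h, w)).reshape(b, t, -1, h, w)
```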